
    Working with infoscience is easy!

    This is an article that shows how easy it is to work with infoscience and make research reproducible.

    Learning Network Structures from Firing Patterns

    How can we decipher the hidden structure of a network based on limited observations? This question arises in many scenarios, ranging from social to wireless to neural networks. In such settings, we typically observe the nodes' behaviors (e.g., the time a node learns about a piece of information, or the time a node gets infected by a disease), and we are interested in inferring the true network over which the diffusion takes place. In this paper, we consider this problem over a neural network, where our aim is to reconstruct the connectivity between neurons merely by observing their firing activity. We develop an iterative NEUral INFerence algorithm, NEUINF, that identifies the type of effective neural connections (i.e., excitatory/inhibitory) based on the Perceptron learning rule. We provide theoretical bounds on the average performance of NEUINF, as well as numerical analysis comparing the proposed approach to prior art.
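
    A minimal sketch of the core idea, assuming a simple thresholded firing model: a target neuron fires when the weighted sum of its presynaptic inputs is positive, and a Perceptron-style update recovers the signs (excitatory/inhibitory/absent) of the hidden weights. All names, sizes, and the thresholding heuristic below are illustrative, not the paper's NEUINF implementation.

```python
# Perceptron-style inference of connection signs from firing patterns.
# Illustrative sketch only; the paper's algorithm and guarantees differ.
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 5000                        # presynaptic neurons, observation windows
w_true = rng.choice([-1, 0, 1], size=n, p=[0.2, 0.6, 0.2])  # hidden weights onto one target

X = rng.integers(0, 2, size=(T, n))    # presynaptic firing patterns (0/1)
y = (X @ w_true > 0).astype(int)       # target fires when its net input is positive

w_hat = np.zeros(n)
for t in range(T):                     # Perceptron rule: nudge weights on mistakes
    y_pred = int(X[t] @ w_hat > 0)
    w_hat += (y[t] - y_pred) * X[t]

# classify each connection as excitatory (+1), inhibitory (-1), or absent (0)
# by thresholding near zero (a heuristic chosen for this sketch)
signs = np.sign(w_hat) * (np.abs(w_hat) > 0.1 * np.abs(w_hat).max())
print("sign pattern matches truth on", int((signs == w_true).sum()), "of", n, "connections")
```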

    Asynchronous Decoding of LDPC Codes over BEC

    LDPC codes are typically decoded by running a synchronous message-passing algorithm over the corresponding bipartite factor graph (made of variable and check nodes). More specifically, each synchronous round consists of (1) updating all variable nodes based on the information received from the check nodes in the previous round, and then (2) updating all check nodes based on the information sent by the variable nodes in the current round. However, in many applications, ranging from message passing in neural networks to hardware implementations of LDPC codes, assuming that all messages are sent and received at the same time is far from realistic. In this paper, we investigate the effect of asynchronous message passing on the decoding of LDPC codes over the binary erasure channel (BEC). We assume that each edge of the factor graph carries a random delay that models the propagation delay of a message along that edge. As a result, the output messages of a check/variable node are also updated asynchronously, upon the arrival of a new message at its input. We show, for the first time, that the asymptotic performance of asynchronous message passing is fully characterized by a fixed-point integral equation that accounts for both the temporal and the spatial features of the factor graph. This result is reminiscent of the fixed-point equation in traditional BP decoding. Surprisingly, our simulation results show that asynchronous scheduling substantially outperforms traditional BP in the finite-block-length regime by avoiding standard trapping sets.
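
    To make the asynchronous-scheduling idea concrete, here is an event-driven sketch of erasure decoding on a toy factor graph: each message propagates with a random edge delay, and a check node re-fires whenever a newly recovered variable reaches it. The graph, delay distribution, and scheduling below are illustrative and much simpler than the paper's setup.

```python
# Event-driven asynchronous erasure decoding on a toy factor graph.
# A check node that sees exactly one erased neighbor can recover it
# (peeling, equivalent to BP over the BEC); recoveries propagate to
# neighboring checks after random delays. Illustrative sketch only.
import heapq, random

random.seed(1)
checks = [[0, 1, 2], [2, 3, 4], [0, 3, 5]]      # check -> variable indices (toy code)
n_var = 6
known = [random.random() > 0.4 for _ in range(n_var)]  # channel output: True = received

events = []                                      # priority queue of (time, check index)
for c in range(len(checks)):
    heapq.heappush(events, (random.random(), c))

t_end = 0.0
while events:
    t, c = heapq.heappop(events)
    erased = [v for v in checks[c] if not known[v]]
    if len(erased) == 1:                         # check resolves exactly one erasure
        known[erased[0]] = True
        t_end = t
        # notify every check touching the recovered variable, after a
        # random edge delay (models asynchronous message propagation)
        for c2, vs in enumerate(checks):
            if erased[0] in vs:
                heapq.heappush(events, (t + random.random(), c2))

print("all recovered:", all(known), "| erasures left:", known.count(False),
      "| last update at t =", round(t_end, 3))
```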

    Learning Neural Connectivity from Firing Activity: Scalable Algorithms with Provable Guarantees

    The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the center of many research initiatives, such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex, and costly. As a result, inverse methods that use the firing activity of neurons to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies: it will soon be possible to simultaneously monitor the activity of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify functional connections from firing activity, the scalability of the proposed techniques poses a major challenge in applying them to large-scale datasets of recorded firing activity. In the exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, in which we observe the behavior of nodes/neurons (i.e., firing activity) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of Leaky Integrate-and-Fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
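
    For readers unfamiliar with the neuron model, the following is a minimal Leaky Integrate-and-Fire simulation of the kind of spike data such reconstruction algorithms consume: membrane potentials leak toward rest, accumulate recurrent and external input, and emit a spike when they cross a threshold. All constants and the network below are illustrative; the paper's generative model and learning rule are more involved.

```python
# Minimal leaky integrate-and-fire (LIF) network simulation.
# Illustrative sketch of the data model only, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 10, 2000, 1e-3                 # neurons, time steps, step size (s)
tau, v_thresh, v_reset = 20e-3, 1.0, 0.0      # membrane time constant, threshold, reset

W = rng.normal(0, 0.4, size=(n, n)) * (rng.random((n, n)) < 0.2)  # sparse synapses
v = np.zeros(n)                               # membrane potentials
spikes = np.zeros((steps, n), dtype=bool)     # recorded firing activity

for t in range(steps):
    ext = rng.normal(0.05, 0.02, size=n)      # noisy external drive
    # leak toward rest + recurrent input from last step's spikes + drive
    v += dt / tau * (-v) + W @ spikes[t - 1].astype(float) + ext
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset                        # reset neurons that spiked

print("mean firing rate (Hz):", round(float(spikes.mean() / dt), 1))
```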

    Nonbinary Associative Memory With Exponential Pattern Retrieval Capacity and Iterative Learning

    We consider the problem of neural association for a network of nonbinary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall the previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting the redundancy and internal structure of the patterns to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., they comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is exponential in the number of neurons. The second result extends this finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and the error-correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple algorithms are presented for the recall phase. Using analytical methods and simulations, we show that the proposed methods can tolerate a fair amount of error in the input while memorizing an exponentially large number of patterns.
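
    A sketch of the subspace intuition behind the exponential-capacity result: if all valid patterns lie in a k-dimensional subspace of R^n, then the null-space directions act like parity checks that detect any noise taking the state out of the subspace. The dimensions, integer ranges, and the brute-force single-error recall below are stand-ins of my own; the paper's learning and recall algorithms are iterative and local.

```python
# Subspace patterns: detection via the null space, toy single-error recall.
# Illustrative sketch only; not the paper's learning/recall algorithms.
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 4                                  # neurons, subspace dimension
G = rng.integers(-2, 3, size=(n, k))          # basis of the pattern subspace
pattern = G @ rng.integers(0, 4, size=k)      # one of exponentially many valid patterns

Q, _ = np.linalg.qr(G.astype(float))          # orthonormal basis of the subspace
syndrome = lambda x: x - Q @ (Q.T @ x)        # (near) zero iff x lies in the subspace

noisy = pattern.astype(float)
noisy[-1] += 3                                # corrupt one neuron's state
print("noise detected:", not np.allclose(syndrome(noisy), 0))

# toy recall: search single-coordinate corrections until the syndrome vanishes
recalled = None
for i in range(n):
    for delta in range(-3, 4):
        cand = noisy.copy()
        cand[i] += delta
        if np.allclose(syndrome(cand), 0, atol=1e-8):
            recalled = cand.astype(int)
            break
    if recalled is not None:
        break
print("recall exact:", recalled is not None and np.array_equal(recalled, pattern))
```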

    Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory

    We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network.

    Molecular associative memory: An associative memory framework with exponential storage capacity for DNA computing

    The associative memory problem is to find the stored vector closest (in Hamming distance) to a given query vector. There are different ways to implement an associative memory, including neural networks and DNA strands. With neural networks, connection weights are adjusted in order to perform association; the recall procedure is iterative and relies on simple neural operations, and the design criterion is to maximize the number of stored patterns C while retaining some noise tolerance. The molecular implementation is based on synthesizing C DNA strands as stored vectors; recall is usually done in one shot via chemical reactions and relies on the high parallelism of DNA computing, and here the design criterion is to find proper DNA sequences that minimize the probability of error during the recall phase. Current molecular associative memories are either low in storage capacity, if implemented using molecular realizations of neural networks, or very complex to implement, if all the stored sequences have to be synthesized. We introduce an associative memory framework with exponential storage capacity based on transcriptional networks of DNA switches. The advantages of the proposed approach over current methods are: (1) exponential storage capacities cannot be achieved with current neural-network-based approaches; (2) for other methods, although exponential storage capacity is possible, achieving it is very complex, as it requires synthesizing an extraordinarily large number of DNA strands.
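
    For reference, the baseline problem statement can be written in a few lines: brute-force nearest-neighbor recall in Hamming distance over C explicitly stored binary vectors. The sizes below are illustrative; the point of the paper's framework is to retrieve from exponentially many patterns without materializing each one like this.

```python
# Brute-force Hamming-distance associative recall over C stored vectors.
# Illustrative baseline only, with made-up sizes.
import numpy as np

rng = np.random.default_rng(0)
C, n = 8, 16                                  # number of stored vectors, vector length
stored = rng.integers(0, 2, size=(C, n))      # explicitly stored binary patterns

query = stored[3].copy()
query[:2] ^= 1                                # corrupt two bits of stored pattern 3

dists = (stored != query).sum(axis=1)         # Hamming distance to each stored vector
# with these sizes, the corrupted pattern's index is recovered w.h.p.
print("recalled index:", int(dists.argmin()), "| distance:", int(dists.min()))
```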